Welcome to the Realistic Graphics and Imaging group in the Department of Computing at Imperial College London. We conduct research in realistic computer graphics spanning the acquisition, modeling, and rendering of real-world materials, objects, and scenes, as well as imaging for graphics and vision, including computational photography and illumination. We are affiliated with the Visual Computing research theme within DOC.
Projects
Acquiring axially-symmetric transparent objects using single-view transmission imaging
We propose a novel, practical solution for high-quality reconstruction of axially-symmetric transparent objects such as glasses, tumblers, goblets, and carafes, using single-view transmission imaging of a few patterns emitted from a background LCD panel. Our approach employs inverse ray tracing to reconstruct both completely symmetric and more complex n-fold symmetric everyday transparent objects.
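As a rough illustrative sketch (not the project's reconstruction pipeline), the core operation inside any such inverse ray tracer is refracting a ray at a surface interface via the vector form of Snell's law; the function name and scene values below are hypothetical:

```python
import numpy as np

def refract(d, n, eta):
    """Refract unit direction d at a surface with unit normal n.
    eta = n_incident / n_transmitted (vector form of Snell's law).
    Returns None on total internal reflection."""
    cos_i = -np.dot(d, n)
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0.0:
        return None  # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

# Toy example: ray hitting a flat air-to-glass interface at 45 degrees
d = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)  # incoming unit direction
n = np.array([0.0, 1.0, 0.0])                  # surface normal
t = refract(d, n, 1.0 / 1.5)                   # air (1.0) into glass (1.5)
```

In the actual method, such refraction events are traced through the hypothesized symmetric surface and compared against the observed LCD patterns.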
Acquiring Spatially Varying Appearance of Printed Holographic Surfaces
We present two novel and complementary approaches to measuring diffraction effects in commonly found planar, spatially varying holographic surfaces. Such holographic surfaces are usually manufactured with one-dimensional diffraction gratings whose periodicity and orientation vary across the sample in order to produce a wide range of diffraction effects, such as gradients and kinematic (rotational) effects. Our proposed methods estimate these two parameters and allow accurate reproduction of these effects in real time.
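For context, the estimated grating periodicity maps directly onto the classical grating equation, which determines where each wavelength is diffracted. A minimal sketch with an assumed helper name and toy values (not the project's estimation method):

```python
import numpy as np

def diffraction_angle(wavelength_nm, period_nm, order, incident_deg=0.0):
    """Grating equation: sin(theta_m) = sin(theta_i) + m * lambda / d.
    Returns the diffracted angle in degrees, or None if the order
    does not propagate (evanescent)."""
    s = np.sin(np.radians(incident_deg)) + order * wavelength_nm / period_nm
    if abs(s) > 1.0:
        return None  # no propagating diffraction order
    return float(np.degrees(np.arcsin(s)))

# First-order angles for red and blue light on a 1000 nm pitch grating:
# longer wavelengths diffract to larger angles, producing the rainbow spread
red = diffraction_angle(650.0, 1000.0, 1)
blue = diffraction_angle(450.0, 1000.0, 1)
```

Varying the period and the grating orientation per surface point yields the gradient and kinematic effects the abstract describes.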
Deep Polarization 3D Imaging
We present a novel method for efficient acquisition of the shape and spatially varying reflectance of 3D objects using polarization cues. We couple polarization imaging with deep learning to achieve high-quality estimates of 3D object shape (surface normals and depth) and SVBRDF using single-view polarization imaging under frontal flash illumination.
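As a hedged sketch of the standard preprocessing that such polarization cues build on (not the project's network): the linear Stokes parameters, and from them the degree and angle of linear polarization, can be recovered from four captures behind a polarizer at 0, 45, 90, and 135 degrees; helper names are our own:

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from four polarizer-angle measurements."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90                        # horizontal vs vertical
    s2 = i45 - i135                      # diagonal components
    return s0, s1, s2

def dolp_aolp(i0, i45, i90, i135):
    """Degree (DoLP) and angle (AoLP, radians) of linear polarization."""
    s0, s1, s2 = linear_stokes(i0, i45, i90, i135)
    dolp = np.sqrt(s1**2 + s2**2) / s0
    aolp = 0.5 * np.arctan2(s2, s1)
    return dolp, aolp
```

The DoLP and AoLP maps are the per-pixel cues commonly fed to learning-based shape estimation alongside the flash image.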
Deep Shape and SVBRDF Estimation using Smartphone Multi-lens Imaging
We present a deep neural network-based method that acquires high-quality shape and spatially varying reflectance of 3D objects using smartphone multi-lens imaging. Our method simultaneously acquires two images using a smartphone's zoom lens and wide-angle lens under either natural illumination or phone flash, effectively functioning as a single-shot method.
Desktop-based High Quality Facial Capture
We present a novel desktop-based system for high-quality facial capture, including geometry and facial appearance. The proposed acquisition system is highly practical and scalable, consisting purely of commodity components. The setup employs a set of displays for controlled illumination for reflectance capture, in conjunction with multiview acquisition of facial geometry.
Diffuse-Specular Separation using Binary Spherical Gradient Illumination
We introduce a novel method for view-independent diffuse-specular separation of albedo and photometric normals using binary spherical gradient illumination, without requiring polarization. The method does not impose restrictions on viewpoints and requires fewer photographs for multiview acquisition than polarized spherical gradient illumination.
Efficient surface diffraction renderings with Chebyshev approximations
We propose an efficient method for reproducing diffraction colours on natural surfaces with complex nanostructures that can be represented as height fields. Our method employs Chebyshev approximations to accurately model view-dependent iridescence for such a surface in its spectral bidirectional reflectance distribution function (BRDF). As our main contribution, the method significantly reduces the runtime memory footprint of precomputed lookup tables without compromising photorealism.
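The memory trade-off can be illustrated with NumPy's Chebyshev module: a smooth 1D response that would otherwise be stored as a dense lookup table is compressed into a handful of coefficients. The response function below is a made-up stand-in, not the project's BRDF model:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Dense "lookup table": 2048 samples of a smooth 1D response on [-1, 1]
x = np.linspace(-1.0, 1.0, 2048)
f = np.exp(-3.0 * x**2) * np.cos(6.0 * x)  # hypothetical smooth response

# Replace the table with a degree-24 Chebyshev expansion (25 coefficients)
coeffs = C.chebfit(x, f, deg=24)

# Reconstruction error of the compact representation over the whole table
max_err = float(np.max(np.abs(C.chebval(x, coeffs) - f)))
```

For smooth functions, Chebyshev coefficients decay rapidly, so a small coefficient vector reproduces the table to high accuracy — the same principle, applied per spectral BRDF slice, drives the memory savings claimed above.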
High Quality Neural Relighting using Practical Zonal Illumination
We present a method for high-quality image-based relighting using a practical, limited zonal illumination field. We employ a set of desktop monitors to illuminate a subject from a near-hemispherical zone and record One-Light-At-A-Time (OLAT) images from multiple viewpoints. We further extrapolate the sampling of incident illumination directions beyond the frontal coverage of the monitors by repeating OLAT captures with the subject rotated relative to the capture setup. Finally, we train our proposed skip-assisted autoencoder and latent-diffusion-based generative method to learn a high-quality continuous representation of the reflectance function without requiring explicit alignment of the data captured from various viewpoints. This method enables smooth lighting animation for high-frequency reflectance functions and effectively extends incident lighting beyond the illumination zone of the practical capture setup.
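The OLAT formulation rests on the linearity of light transport: an image under any target lighting is a weighted sum of the OLAT basis images, with weights given by the target illumination sampled at the light directions. A minimal sketch with synthetic data (array shapes and names are illustrative, not the capture setup's):

```python
import numpy as np

rng = np.random.default_rng(0)
n_lights, h, w = 32, 4, 4

# One (tiny) image per light source, captured one light at a time
olat = rng.random((n_lights, h, w))

# Target environment lighting sampled toward each light direction
weights = rng.random(n_lights)

# Relit image = weighted sum over the OLAT basis (linearity of transport)
relit = np.tensordot(weights, olat, axes=1)
```

The neural model described above replaces this discrete sum with a learned continuous reflectance function, which is what allows smooth interpolation and extrapolation of lighting directions.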
High-Quality Facial Geometry from Sparse Heterogeneous Cameras under Active Illumination
We present a geometry reconstruction method tailored for an active-illumination facial capture setup featuring sparse cameras with varying characteristics. Our technique builds upon hash-encoded neural surface reconstruction, which we enhance with additional active-illumination-based supervision and loss functions, allowing us to maintain high reconstruction speed and geometrical fidelity even with reduced camera coverage.
Image-Based Relighting using Room Lighting Basis
We present a novel and practical approach to image-based relighting that employs the lights available in a regular room to acquire the reflectance field of an object. We achieve plausible results for diffuse and glossy objects that are qualitatively similar to results produced with dense sampling of the reflectance field, such as with a light stage. We believe our approach can be applied to practical relighting applications with general studio lighting.